Search Results: "Tollef Fog Heen"

5 October 2011

Tollef Fog Heen: The SugarCRM rest interface

We use SugarCRM at work and I've complained about its not-very-RESTy REST interface. John Mertic, a (the?) SugarCRM Community Manager, asked me what problems I'd had (apart from its lack of RESTfulness) and I said I'd write a blog post about it. In our case, the REST interface is used to integrate Sugar and RT so we get a link in both interfaces to jump from opportunities to the corresponding RT ticket (and back again). This should be a fairly trivial exercise, or so you would think. The problems, as I see them, are as follows. My first gripe is the complete lack of REST in the URLs. Everything is just sent to https://sugar/service/v2/rest.php. Usually a POST, but sometimes a GET. It's not documented what to use where. The POST parameters we send when logging in are:
method=>"login"
input_type=>"JSON"
response_type=>"JSON"
rest_data=>json($params)
$params is a hash as follows:
user_auth => {
    user_name => $USERNAME,
    password  => $PW,
    version   => "1.2",
},
application => "foo",
Nothing seems to actually care about the value of application, nor about the user_auth.version value. The password is the md5 of the actual password, hex encoded. I'm not sure why, as this adds absolutely no security, but there it is. This is also not properly documented. This gives us a JSON object back with a somewhat haphazard selection of attributes (reformatted here for readability):
 
     "id":"<hex session id>,
     "module_name":"Users",
     "name_value_list":  
             "user_id":  
                     "name":"user_id",
                     "value":"1"
              ,
             "user_name":  
                     "name":"user_name",
                     "value":"<username>"
              ,
             "user_language":  
                     "name":"user_language",
                     "value":"en_us"
              ,
             "user_currency_id":  
                     "name":"user_currency_id",
                 "value":"-99"
              ,
             "user_currency_name":  
                     "name":"user_currency_name",
                     "value":"Euro"
              
      
 
What is the module_name? No real idea. In general, when you get back an id and a module_name field, it tells you that the id refers to an object that exists in the context of the given module. Not here, since the session id is not a user. The worst here is the name_value_list concept, which is used all over the REST interface. First, it's not a list, it's a hash. Secondly, I have no idea what would be wrong with just using the keys directly in the top-level object, so the object would have looked somewhat like:
 
     "id":"<hex session id>,
     "user_id": 1,
     "user_name": "<username>,
     "user_language":"en_us",
     "user_currency_id": "-99",
     "user_currency_name": "Euro"
 
Some people might argue that since you can have custom field names this can cause clashes. Except, it can't, since they're all suffixed with _c. So we're now logged in and can fetch all opportunities. This we do by posting:
method=>"get_entry_list",
input_type=>"JSON",
response_type=>"JSON",
rest_data=>to_json([
            $sid,
            $module,
            $where,
            "",
            $next,
            $fields,
            $links,
            1000
])
Why is this a list rather than a hash? Again, I don't know. A hash would make more sense to me. The resulting JSON looks like:
 
    "result_count" : 16,
    "relationship_list" : [],
    "entry_list" : [
        
          "name_value_list" :  
             "rt_status_c" :  
                "value" : "resolved",
                "name" : "rt_status_c"
              ,
             [ ]
           ,
          "module_name" : "Opportunities",
          "id" : "<entry_uuid>"
        ,
       [ ]
    ],
    "next_offset" : 16
 
Now, entry_list actually is a list here, which is good and all, but there's still the annoying name_value_list concept. Last, we want to update the record in Sugar. To do this, we post:
method=>"set_entry",
input_type=>"JSON",
response_type=>"JSON",
rest_data=>to_json([
    $sid,
    "Opportunities",
    $fields
])
$fields is not a name_value_list, but instead is:
 
    "rt_status_c" : "resolved",
    "id" : "<status text>"
 
Why this works when my attempts at using a proper name_value_list didn't, I have no idea. I think that pretty much sums it up. I'm sure there are other problems in there (such as needing over 100 lines of support code for the roughly 20 lines of actual code that do useful work), though.
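For reference, here is a minimal sketch in Python of the three calls described above (login, get_entry_list and set_entry). The field names and parameter order follow the examples in this post; the https://sugar/service/v2/rest.php URL, the credentials and the helper name are placeholders, and the meaning of some positional get_entry_list arguments is my best guess, so treat this as illustration rather than a reference implementation:

import hashlib
import json
import requests  # third-party HTTP library, assumed to be available

SUGAR_URL = "https://sugar/service/v2/rest.php"  # placeholder URL, as in the post

def call(method, params):
    # Everything goes to the single rest.php endpoint as a POST, JSON in and out.
    resp = requests.post(SUGAR_URL, data={
        "method": method,
        "input_type": "JSON",
        "response_type": "JSON",
        "rest_data": json.dumps(params),
    })
    resp.raise_for_status()
    return resp.json()

# Log in; the password is the hex-encoded md5 of the real password.
login = call("login", {
    "user_auth": {
        "user_name": "myuser",
        "password": hashlib.md5(b"secret").hexdigest(),
        "version": "1.2",
    },
    "application": "foo",
})
sid = login["id"]

# Fetch opportunities; note that get_entry_list takes a positional list, not a hash.
result = call("get_entry_list", [
    sid,                     # session id
    "Opportunities",         # module
    "",                      # where clause
    "",                      # order by (guess)
    0,                       # offset
    ["id", "rt_status_c"],   # fields to return
    [],                      # linked modules/fields
    1000,                    # max results
])
for entry in result["entry_list"]:
    print(entry["id"], entry["name_value_list"]["rt_status_c"]["value"])

# Update a record; here the fields are a plain hash with the id plus the changed values.
call("set_entry", [sid, "Opportunities", {"id": "<entry_uuid>", "rt_status_c": "resolved"}])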

31 August 2011

Tollef Fog Heen: Bizarre slapd (and gnutls) failures

Just this morning, I was setting up TLS on an LDAP host, but slapd refused to start afterwards with a bizarre error message:
TLS init def ctx failed: -207
The key and certificate were freshly generated using openssl on my laptop (running wheezy, so OpenSSL 1.0.0d-3). After a bit of googling, I discovered that -207 is gnutls-esque for "Base64 error". Of course, the key looks just fine and decodes fine using base64, openssl base64 and even gnutls's own certtool. Now, certtool also spits out what it considers the right base64 version of the key, and I noticed it differed. Using the one certtool outputs seems to work, though, so if you ever run into this problem, try running the key through certtool --infile foo.pem -k and use the base64 representation it outputs.

10 August 2011

Tollef Fog Heen: Test post

Sorry about this, debugging planet.debian.

3 August 2011

Tollef Fog Heen: libvmod_curl using cURL from inside Varnish Cache

It's sometimes necessary to be able to access HTTP resources from inside VCL. Some use cases include authentication or authorization, where a service validates a token and then tells Varnish whether to proceed or not. To do this, we recently implemented libvmod_curl, which is a set of cURL bindings for VCL so you can fetch remote resources easily. HTTP would be the usual method, but cURL also supports other protocols such as LDAP or POP3. The API is very simple; to use it you would do something like:
require curl;
sub vcl_recv {
    curl.fetch("http://authserver/validate?key=" + regsub(req.url, ".*key=([a-z0-9]+)", "\1"));
    if (curl.status() != 200) {
        error 403 "Go away";
    }
}
Other methods you can use are curl.header(headername) to get the contents of a given header and curl.body() to get the body of the response. See the README file in the source for more information.

21 May 2011

Tollef Fog Heen: Upgrading Alioth

A while ago, we got another machine for hosting Alioth, and so we started thinking about how to use that machine. It's a used machine and not massively faster than the current hardware, so just moving everything over wouldn't actually get us that much of a performance upgrade. However, Alioth is using FusionForge, which is supposed to be able to run on a cluster of machines. After all, this was originally built for SourceForge.net, which certainly does not run on a single host. So, a split of services is what we'll do. This weekend, we're having a sprint in Collabora's office in Cambridge, actually implementing the split and doing a bit of general planning for the future. Late Friday afternoon, European time, we started the migration. The first step is to move all the data off the Xen guest on wagner, where Alioth is currently hosted. This finished a few minutes ago; it turns out syncing about 8.5 million files across almost 400G of data takes a little while. The new host is called vasks and will host the database, run the main apache and be the canonical location for the various SCM repositories. We are not decommissioning wagner, but it'll be reinstalled without Xen or other virtualisation, which should help performance a bit. It'll host everything that has lower performance requirements, such as cron jobs, mailing lists and so on. I'll try to keep you all updated, and feel free to drop by #alioth on irc.debian.org if you have any questions.

15 March 2011

Colin Watson: Wubi bug 693671

I spent most of last week working on Ubuntu bug 693671 ("wubi install will not boot - phase 2 stops with: Try (hd0,0): NTFS5"), which was quite a challenge to debug since it involved digging into parts of the Wubi boot process I'd never really touched before. Since I don't think much of this is very well-documented, I'd like to spend a bit of time explaining what was involved, in the hope that it will help other developers in the future. Wubi is a system for installing Ubuntu into a file in a Windows filesystem, so that it doesn't require separate partitions and can be uninstalled like any other Windows application. The purpose of this is to make it easy for Windows users to try out Ubuntu without the need to worry about repartitioning, before they commit to a full installation. Wubi started out as an external project, and initially patched the installer on the fly to do all the rather unconventional things it needed to do; we integrated it into Ubuntu 8.04 LTS, which involved turning these patches into proper installer facilities that could be accessed using preseeding, so that Wubi only needs to handle the Windows user interface and other Windows-specific tasks. Anyone familiar with a GNU/Linux system's boot process will immediately see that this isn't as simple as it sounds. Of course, ntfs-3g is a pretty solid piece of software so we can handle the Windows filesystem without too much trouble, and loopback mounts are well-understood so we can just have the initramfs loop-mount the root filesystem. Where are you going to get the kernel and initramfs from, though? Well, we used to copy them out to the NTFS filesystem so that GRUB could read them, but this was overly complicated and error-prone. When we switched to GRUB 2, we could instead use its built-in loopback facilities, and we were able to simplify this. So all was more or less well, except for the elephant in the room. How are you going to load GRUB? In a Wubi installation, NTLDR (or BOOTMGR in Windows Vista and newer) still owns the boot process. Ubuntu is added as a boot menu option using BCDEdit. You might then think that you can just have the Windows boot loader chain-load GRUB. Unfortunately, NTLDR only loads 16 sectors - 8192 bytes - from disk. GRUB won't fit in that: the smallest core.img you can generate at the moment is over 18 kilobytes. Thus, you need something that is small enough to be loaded by NTLDR, but that is intelligent enough to understand NTFS to the point where it can find a particular file in the root directory of a filesystem, load boot loader code from it, and jump to that. The answer for this was GRUB4DOS. Most of GRUB4DOS is based on GRUB Legacy, which is not of much interest to us any more, but it includes an assembly-language program called GRLDR that supports doing this very thing for FAT, NTFS, and ext2. In Wubi, we build GRLDR as wubildr.mbr, and build a specially-configured GRUB core image as wubildr. Now, the messages shown in the bug report suggested a failure either within GRLDR or very early in GRUB. The first thing I did was to remember that GRLDR has been integrated into the grub-extras ntldr-img module suitable for use with GRUB 2, so I tried building wubildr.mbr from that; no change, but this gave me a modern baseline to work on. OK; now to try QEMU (you can use tricks like qemu -hda /dev/sda if you're very careful not to do anything that might involve writing to the host filesystem from within the guest, such as recursively booting your host OS ... 
[update: Tollef Fog Heen and Zygmunt Krynicki both point out that you can use the -snapshot option to make this safer]). No go; it hung somewhere in the middle of NTLDR. Still, I could at least insert debug statements, copy the built wubildr.mbr over to my test machine, and reboot for each test, although it would be slow and tedious. Couldn't I? Well, yes, I mostly could, but that 8192-byte limit came back to bite me, along with an internal 2048-byte limit that GRLDR allocates for its NTFS bootstrap code. There were only a few spare bytes. Something like this would more or less fit, to print a single mark character at various points so that I could see how far it was getting:
	pushal
	xorw	%bx, %bx	/* video page 0 */
	movw	$0x0e4d, %ax	/* print 'M' */
	int	$0x10
	popal
In a few places, if I removed some code I didn't need on my test machine (say, CHS compatibility), I could even fit in cheap and nasty code to print a single register in hex (as long as you didn't mind 'A' to 'F' actually being ':' to '?' in ASCII; and note that this is real-mode code, so the loop counter is %cx not %ecx):
	/* print %edx in dumbed-down hex */
	pushal
	xorw	%bx, %bx
	movb	$0xe, %ah
	movw	$8, %cx
1:
	roll	$4, %edx
	movb	%dl, %al
	andb	$0xf, %al
	int	$0x10
	loop	1b
	popal
After a considerable amount of work tracking down problems by bisection like this, I also observed that GRLDR's NTFS code bears quite a bit of resemblance in its logical flow to GRUB 2's NTFS module, and indeed the same person wrote much of both. Since I knew that the latter worked, I could use it to relieve my brain of trying to understand assembly code logic directly, and could compare the two to look for discrepancies. I did find a few of these, and corrected a simple one. Testing at this point suggested that the boot process was getting as far as GRUB but still wasn't printing anything. I removed some Ubuntu patches which quieten down GRUB's startup: still nothing - so I switched my attentions to grub-core/kern/i386/pc/startup.S, which contains the first code executed from GRUB's core image. Code before the first call to real_to_prot (which switches the processor into protected mode) succeeded, while code after that point failed. Even more mysteriously, code added to real_to_prot before the actual switch to protected mode failed too. Now I was clearly getting somewhere interesting, but what was going on? What I really wanted was to be able to single-step, or at least see what was at the memory location it was supposed to be jumping to. Around this point I was venting on IRC, and somebody asked if it was reproducible in QEMU. Although I'd tried that already, I went back and tried again. Ubuntu's qemu is actually built from qemu-kvm, and if I used qemu -no-kvm then it worked much better. Excellent! Now I could use GDB:
(gdb) target remote | qemu -gdb stdio -no-kvm -hda /dev/sda
This let me run until the point when NTLDR was about to hand over control, then interrupt and set a breakpoint at 0x8200 (the entry point of startup.S). This revealed that the address that should have been real_to_prot was in fact garbage. I set a breakpoint at 0x7c00 (GRLDR's entry point) and stepped all the way through to ensure it was doing the right thing. In the process it was helpful to know that GDB and QEMU don't handle real mode very well between them, so a few tricks were needed to work around that. Single-stepping showed that GRLDR was loading the entirety of wubildr correctly and jumping to it. The first instruction it jumped to wasn't in startup.S, though, and then I remembered that we prefix the core image with grub-core/boot/i386/pc/lnxboot.S. Stepping through this required a clear head since it copies itself around and changes segment registers a few times. The interesting part was at real_code_2, where it copies a sector of the kernel to the target load address, and then checks a known offset to find out whether the "kernel" is in fact GRUB rather than a Linux kernel. I checked that offset by hand, and there was the smoking gun. GRUB recently acquired Reed-Solomon error correction on its core image, to allow it to recover from other software writing over sectors in the boot track. This moved the magic number lnxboot.S was checking somewhat further into the core image, after the first sector. lnxboot.S couldn't find it because it hadn't copied it yet! A bit of adjustment and all was well again. The lesson for me from all of this has been to try hard to get an interactive debugger working. Really hard. It's worth quite a bit of up-front effort if it saves you from killing neurons stepping through pages of code by hand. I think the real-mode debugging tricks I picked up should be useful for working on GRUB in the future.

30 November 2010

Tollef Fog Heen: My Varnish is leaking memory

Every so often, we get bug reports about Varnish leaking memory. People have told Varnish to use 20 gigabytes for cache, they discover the process is eating 30 gigabytes of memory, and they get confused about what's going on. So, let's take a look. First, a little bit of history. Varnish 2.0 had a fixed per-object workspace which was used both for header manipulation in vcl_fetch and for storing the headers of the object when vcl_fetch was done. The default size of this workspace was 8k. If we assume an average object size of 20k, that is almost 1/3 of the store being overhead. With 2.1, this changed. First, vcl_fetch doesn't have obj any longer; it only has beresp, which is the backend response. At the end of vcl_fetch, the headers and other relevant bits of the backend response are copied into an object. This means we no longer have a fixed overhead; we use what we need. Of course, we're still subject to malloc's whims when it comes to page sizes and how it actually allocates memory. Less overhead means more objects in the store. More objects in the store means, everything else being equal, more overhead outside the store (for the hash buckets or critbit tree and other structs). This is where lots of people get confused, since what they see is just Varnish consuming more memory. When moving from 2.0 to 2.1, people should lower their cache size. How much depends on the number of objects they have, but if they have many small objects, a significant reduction might be needed. For a machine dedicated to Varnish, we usually recommend making the cache size 70-75% of the memory of the machine. A reasonable question to ask at this point is what all this overhead is being used for. Part of it is per-thread overhead. Linux has a 10MB stack size by default, but luckily, most of it isn't allocated, so it only counts against virtual, not resident memory. In addition, we have a hash algorithm which has overhead, and the headers from the objects are stored in the object itself and not in the stevedore (object store). Last, but by no means least, we usually see an overhead of around 1k per object, though I have seen up to somewhere above 2k. This doesn't sound like much, but when you're looking at servers with 10 million objects, 1k of overhead means 10 gigabytes of total overhead, leading to the confusion I talked about at the start.
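To make the arithmetic concrete, here is a rough back-of-envelope sketch in Python; the 1k per-object overhead is the figure from this post, while the thread count and per-thread resident stack size are purely illustrative guesses:

GIB = 1024 ** 3

def varnish_memory_estimate(cache_bytes, object_count, per_object_overhead=1024,
                            threads=500, resident_stack_per_thread=128 * 1024):
    # Overhead outside the store: hash buckets / critbit tree and other per-object structs.
    overhead = object_count * per_object_overhead
    # Only the touched part of each (10MB virtual) thread stack counts as resident memory.
    stacks = threads * resident_stack_per_thread
    return cache_bytes + overhead + stacks

# A 20 GB cache holding 10 million objects ends up using noticeably more than 20 GB:
total = varnish_memory_estimate(cache_bytes=20 * GIB, object_count=10 * 1000 * 1000)
print("%.1f GiB resident for a 20 GiB cache" % (total / float(GIB)))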

2 November 2010

Tollef Fog Heen: Temperature logging with 1-wire

Last night, I finally got my temperature sensors going, including a nice and shiny munin plugin giving me pretty graphs. So far, I only have a sensor in the loft, but I'll spend some days putting sensors in the rest of the house as well. Robert McQueen asked me on twitter how this all was set up, so I figured I'd blog about it. The sensors I'm using are the DS18B20 ones from Dallas Semiconductor. You can probably buy them from your local electronics supplier, but mine charges around 75 NOK apiece, so I just bought some off Ebay. It takes a bit longer, but I paid about 1/10th of the price. For logging, I'm using my NAS, which is just a machine running Debian, a USB-to-serial adapter and a serial-to-1-wire adapter. Thanks a lot to Martin Bergek for the writeup and the ELFA part numbers for the diodes. Since I'm lazy, I ended up just writing a plugin for munin. It uses owfs, which I downloaded from mentors.debian.net. I have also offered sponsorship for it, assuming a few small issues are cleaned up, so hopefully you will be able to install it using just Debian in the near future. owfs is fairly easy to work with, and the plugin uses the aliased names if you provide aliases, so you can know what the temperature in a given location is, rather than having to remember 64-bit serial numbers.
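To give an idea of what such a munin plugin can look like, here is a minimal sketch in Python. It assumes owfs is mounted at /mnt/1wire and that each DS18B20 shows up as a 28.<serial> directory containing a temperature file; the plugin described above is not this code, and this sketch does not handle the alias feature mentioned:

#!/usr/bin/env python
# Minimal munin plugin sketch: graph DS18B20 temperatures exposed through owfs.
import glob
import os
import sys

OWFS_ROOT = "/mnt/1wire"  # assumed owfs mount point

def sensors():
    # DS18B20 devices use family code 28, so they appear as 28.<serial>/temperature.
    for path in sorted(glob.glob(os.path.join(OWFS_ROOT, "28.*", "temperature"))):
        serial = os.path.basename(os.path.dirname(path))
        field = "t" + serial.replace(".", "_")  # munin field names must stay simple
        yield field, serial, path

if len(sys.argv) > 1 and sys.argv[1] == "config":
    print("graph_title 1-wire temperatures")
    print("graph_vlabel degrees C")
    print("graph_category sensors")
    for field, serial, _ in sensors():
        print("%s.label %s" % (field, serial))
else:
    for field, _, path in sensors():
        with open(path) as f:
            print("%s.value %s" % (field, f.read().strip()))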

22 July 2010

Raphaël Hertzog: Quick news: dpkg, collab-maint, alioth and the future

Dpkg got rid of Perl: let's start with the interesting part and the great news: dpkg 1.15.8 (to be uploaded soon) will no longer need perl! After my changes to rewrite update-alternatives in C, Guillem recently pushed the rewrite of dpkg-divert/mksplit in C. Please test it out (binary package for i386 or .dsc). This is rather exciting news for those who would like to use dpkg in embedded contexts. And it's great to see this completed in time for Squeeze. In Squeeze+1, we might go one step further and merge cdebconf, the C replacement for debconf. I got rid of some recurring administrative tasks: I have been administrating the Alioth server since its inception (see the announce I sent in 2003) but I'm no longer enjoying the day-to-day administrative work that it represents. That's why I just retired from the team. We recently recruited Tollef Fog Heen so the number of admins is still the same (that said, Alioth could benefit from some more help; if you're a DD and interested, drop a mail to admin@alioth.debian.org or come to #alioth). Same goes for the collab-maint project. I have dealt with hundreds of requests to add new contributors to the project, since it's the central repository where all Debian developers have write access and where they put the VCS for their packages that do not belong to a more specialized team. The new administrator that will approve the requests is Xavier Oswald, and he's doing the work under the umbrella of the New Maintainer's Front Desk. The future: I will continue to spend the same amount of time on Debian; the time freed will quickly be reallocated to other Debian and free software related projects. In fact, I even anticipated a bit by launching Flattr FOSS last week, but that's a relatively simple project. :-) The other projects will never all fit in the freed time: I want to spend more time working on dpkg. I do plan to blog more often too, but I'm sure you'll notice that yourself soon. I would like to see my Debian book translated into English (another post coming on the topic sometime soon). In my dreams, I could even start yet another software project; I have some ideas that I really would like to see implemented, but I don't see how that could fit in this year's planning unless I can convince someone else to implement them! Maybe I should blog about them.

27 March 2010

Tollef Fog Heen: Why I think you should publish your infrastructure

GNOME's current sysadmin team is entirely volunteer-based, but as they are having problems finding enough (trusted) volunteers, they are looking at hiring a part-time sysadmin. From looking at the GNOME wiki, it looks like they have had a meeting about the shortage of sysadmins. Quoting from the minutes:
The biggest problem that we've always had with the maintaining an active sysadmin team is the need for trust. If somebody shows up and wants to help out with a GNOME coding project, then it's easy to build up trust over time. Suggest a project, have the person send patches, review the patches, if the patches are good, eventually give them direct commit access. However, for sysadmin work, we get a lot of people who want to help out, but it's very hard for someone to contribute without being given a "dangerous" level of access to the GNOME systems.
Without having looked very hard, I would guess that the GNOME infrastructure is about as open as that of most proprietary software projects. There's no way for me, as a third party, to take a look at their infrastructure, take a look at their ticket backlog and submit patches for problems. Similarly, their nagios setup is behind a password prompt, so there's no way for me to look at what services often have performance problems, suggest new monitors or point out any servers or services that are not monitored. I'm not saying this to pick on GNOME; as I'll touch on below, they do seem to mostly do the right thing, and as one of the Freedesktop.org sysadmins, I know we're not any better, at least not yet. One way to make it at least somewhat easier to contribute and get involved is to use a tool like Chef or Puppet and publish the recipes. This won't magically make everything transparent, but it'll be a big step up. Ideally, the recipes should be complete enough that you can bootstrap a working system from them and so more easily reproduce the infrastructure and any problems. It seems like GNOME is using puppet, but I couldn't find the recipes. Moving a complete infrastructure from something managed by hand to something managed using automation tools is a fairly big and involved process. However, if you're serious about getting more people involved in your sysadmin team, I think it's one of the more reasonable ways of opening up. It also means that when one of your servers is stolen, catches fire or suffers some other catastrophic failure, you can rebuild the service much quicker. My last point is to open up your ticket tracker. Most tickets aren't security sensitive, so provide a way for people to mark those tickets that are sensitive as such and make the rest public. The GNOME wiki makes this a bit confusing as it talks a bit about RT, but it seems like they actually use bugzilla for sysadmin tickets and just hide security-sensitive ones.

16 March 2010

Tollef Fog Heen: A small explanation about the yubikey

Russell Coker recently reviewed the Yubikey. The article mentions me, so I figured I'd correct a minor thing and respond to one of the comments. First, the yubikey-server-c is my reimplementation of the Yubikey authentication protocol. Yubico provides two implementations, one in PHP and one in Java, neither of which I'm particularly interested in building my system security on. Any bugs, misfeatures, etc. in the C implementation are mine and mine alone. Barak A. Pearlmutter, one of the commenters on Russell's blog, writes: "i don't understand. isn't this thing vulnerable to eavesdropping and replaying? even if it has a counter which changes etc, the things it is talking to (web sites) can't know that some generated string is being reused. and it doesn't even have a clock, so these things can be old." The way the Yubikey works is that you have a central authentication server. This has a secret shared with the key. Setting this secret is the primary function of the personalisation tool. When you press the button, the key takes its internal state (various counters, uid field, etc.) and encrypts this using AES-128. This is then sent to the application you are trying to access, be it Wordpress, SSH or something else. Said application then contacts the authentication server, which decrypts the ticket, checks the values of the counters to make sure it's not a replay and responds with OK, bad ticket, replay or various other status codes. Based on this, the application grants or denies access. There are really two places you could attack this: in the communication between the web browser and the application, or between the application and the authentication server. Both of those can be secured using SSL. There is no way to use a single yubikey in multiple authentication realms without extra software. To do this, you would have an OpenID provider that uses the Yubikey for authentication, or you could have a Kerberos server with cross-realm trust. As for the PAM modules and other tools so far not being packaged: yes, I know, I might fix it, but the current setup has the bits I use, as I use RADIUS authentication to get services to support both Yubikey and passwords.
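To illustrate the flow from the application's point of view, here is a small sketch in Python. The validation URL, parameter name and response format are invented for the example and do not describe yubikey-server-c's actual HTTP API; the point is only that the application forwards the OTP and trusts the server's verdict:

import requests  # third-party HTTP library, assumed to be available

# Hypothetical validation endpoint; run it over SSL so the OTP cannot be tampered with in transit.
VALIDATION_URL = "https://auth.example.org/yubikey/verify"

def check_otp(otp):
    # The server holds the AES key shared with the Yubikey: it decrypts the OTP,
    # compares the counters with its stored state to rule out replays, and answers
    # with a status. The application only has to trust that answer.
    resp = requests.get(VALIDATION_URL, params={"otp": otp}, timeout=5)
    resp.raise_for_status()
    return resp.text.strip() == "OK"

otp = "<whatever the key just typed>"  # the key types its OTP followed by Enter
if check_otp(otp):
    print("access granted")
else:
    print("access denied (bad ticket or replay)")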

15 March 2010

Russell Coker: The Yubikey

Some time ago Yubico were kind enough to send me an evaluation copy of their Yubikey device. I've finally got around to reviewing it and making deployment plans for buying some more. Above is a picture of my Yubikey on the keyboard of my Thinkpad T61 for scale. The newer keys apparently have a different color in the center of the circular press area and can also be purchased in white plastic. The Yubikey is a USB security token from Yubico [1]. It is a use-based token that connects via the USB keyboard interface (see my previous post for a description of the various types of token [2]). The Yubikey is the only device I know of which uses the USB keyboard interface; it seems that this is an innovation that they invented. You can see in the above picture that the Yubikey skips the metal that is used to surround most USB devices; this probably fails to meet some part of the USB specification but does allow them to make the key less than half as thick as it might otherwise be. Mechanically it seems quite solid. The Yubikey is affordable, unlike some token vendors who don't even advertise prices (if you need to ask then you can't afford it), and they have an online sales site: $US25 for a single key, with discounts starting when you buy 10. It seems quite likely that someone who wants such a token will want at least two of them, for different authentication domains, different users in one home, or as a backup in case one is lost or broken (although my experiments have shown that Yubikeys are very hardy and will not break easily). The discount rate of $20 will apply if you can find four friends who want to use them (assuming two each), or if you support several relatives (as I do). The next discount rate of $15 applies when you order 100 units, and they advise that customers contact their sales department directly if purchasing more than 500 units, so it seems likely that a further discount could be arranged when buying more than 500 units. They accept payment via Paypal as well as credit cards. It seems to me that any Linux Users Group could easily arrange an order for 100 units (that would be 10 people with similar needs to me) and a larger LUG could possibly arrange an order of more than 500 units for a better discount. If an order of 500 can't be arranged then an order of 200 would be a good way to get half black keys and half white ones, since you can only buy a pack of 100 in a single color. There is a Wordpress plugin to use Yubikey authentication [3]. It works, but I would be happier if it had an option to accept a Yubikey OR a password (currently it demands both a Yubikey AND a password). I know that this is less secure, but I believe that it's adequate for an account that doesn't have administrative rights. To operate the Yubikey you just insert it into a USB slot and press the button to have it enter the pass code via the USB keyboard interface. The pass code has a prefix that can be used to identify the user, so it can replace both the user-name and password fields; of course it is technically possible to use one Yubikey for authentication with multiple accounts, in which case a user-name would be required. Pressing the Yubikey button causes the pass code to be inserted along with the ENTER key; this can take a little getting used to, as a slow web site combined with a habit of pressing ENTER can result in a failed login (at least this has happened to me with Konqueror). As the Yubikey is use-based, it needs a server to track the usage count of each key.
Yubico provides source to the server software as well as having their own server available on the net; obviously it might be a bad idea to use the Yubico server for remote root access to a server, but for blog posting that is a viable option and saves some effort. If you have multiple sites that may be disconnected then you will either need multiple Yubikeys (at a cost of $20 or $15 each) or you will need to have one Yubikey work with multiple servers. Supporting a single key with multiple authentication servers means that MITM attacks become possible. The full source to the Yubikey utilities is available under the new BSD license. In Debian the base functionality of talking to the Yubikey is packaged as libyubikey0 and libyubikey-dev, the server (for validating Yubi requests via HTTP) is packaged as yubikey-server-c, and the utility for changing the AES key to use your own authentication server is packaged as yubikey-personalization. Thanks to Tollef Fog Heen for packaging all this! The YubiPAM project (a PAM module for Yubikey) is licensed under the GPL [4]. It would be good if this could be packaged for Debian (unfortunately I don't have time to adopt more packages at the moment). There is a new model of Yubikey that has RFID support. They suggest using it for public transport systems, where RFID could be used for boarding and the core Yubikey OTP functionality could be used for purchasing tickets. I don't think it's very interesting for typical hobbyist and sysadmin work, but RFID experts such as Jonathan Oxer might disagree with me on this issue.

16 February 2010

Tollef Fog Heen: Upgrading freedesktop.org hosts

I recently upgraded kemper.freedesktop.org to lenny. Collabora are nice enough to sponsor some of my sysadmin work for freedesktop, and so making sure we are actually running a supported distribution was a good start. The actual dist-upgrade went fine, but when I rebooted with a 2.6.26 kernel, it just hung in the early boot phase. Luckily, a newer kernel worked fine. However, a newer kernel also breaks the NFS kernel server in Lenny. A short backport later, NFS was working fine, except that annarchy (which NFS-mounts from kemper) didn't have nfs-common installed at all, meaning it lacked mount.nfs. Oops. Now, the bugs service was broken. It used an SSH tunnel from annarchy to kemper, but the startup script was nowhere to be found. I replaced it with a trivial stunnel setup, which has the added advantage of reconnecting if the tunnel goes down. The ssh config had to be fixed slightly. We used to use an old and patched sshd that stored all the keys in a single file. I added a tiny script to split that again. We also had MkHomeDir in sshd's config, now replaced with pam_mkhomedir. Another interesting thing I learnt is that the iLO ssh daemon chucks you out if you try to send environment options to it. Like LANG, which is sent by default. Slightly confusing, but easy enough to fix once I knew what the problem was. In addition to kemper, I upgraded, but did not reboot, fruit (the admin and LDAP host), due to not having the iLO password. I did not want to risk sitting there with a non-booting machine I could not fix. It's going to be rebooted at some later stage. I also did not have the iLO password for gabe, which runs mail and some other faff, so I'll have to schedule some more downtime in the near future.

31 January 2010

Axel Beckert: abe@debian.org

On Wednesday I got DAM approval and since Saturday late evening I'm officially a Debian Developer. Yay! :-) My thanks go to everyone who helped along the way. As Bernd cited in his AM report, my earliest activity within the Debian community I can remember was organising the Debian booth at LinuxDay.lu 2003, where I installed Debian 3.0 Woody on my Hamilton Hamstation "hy" (a Sun SparcStation 4 clone). I wrote my first bug report in November 2004 (#283365), probably during the Sarge BSP in Frankfurt. And my first Debian package was wikipedia2text, which I started to package in August 2005 (ITP #325417). My only earlier documented interest in the Debian community is subscribing to the lists debian-apache@l.d.o and debian-emacsen@l.d.o in June 2002. I do remember, though, that I started playing around with Debian 2.0 Hamm, skipped 2.1 (for whatever reasons, I can't remember), used 2.2 quite regularly and started to dive in with Woody, which also ran on my first ThinkPad, "bijou". I installed it over WLAN with just a boot floppy at the Chemnitzer Linux-Tage. :-) Anyway, this has led to what it had to lead to: a new Debian Developer. :-) The first package I uploaded with my newly granted rights was a new conkeror snapshot. This version should work out of the box on Ubuntu again, so that conkeror in Ubuntu should not lag that much behind Debian Sid anymore. In other news: since Wednesday I own a Nokia N900 and use it as my primary mobile phone now. Although it's not as free as the OpenMoko (see two other recent posts by Lucas Nussbaum and by Tollef Fog Heen on Planet Debian), it's definitely what I hoped the OpenMoko would once become. And even if I can't run Debian natively on the N900 (yet), it at least has a Debian chroot on it. :-) I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting. A few weeks ago, I took over the organisation of this year's Debian booth at FOSDEM from Wouter Verhelst, who's busy enough with FOSDEM organisation itself. Last Monday the organiser of the BSD DevRoom at FOSDEM asked on #mirbsd for talk suggestions and they somehow talked me into giving a talk about Debian GNU/kFreeBSD. The slides should show up during the next days on my Debian GNU/kFreeBSD talks page. I hope I'll survive that talk, despite it more or less being a talk saying "Jehova!". ;-) What a week.

25 January 2010

Tollef Fog Heen: How free is the N900?

Lucas asks about how free the N900 is, whether he can download and recompile and reflash. I'll try to answer some of those questions. No, you can't download all the source. Part of it is just not open. I am not privy to Nokia's decisions on why or why not to open up, but it seems like the user interface bits are only partially open. Hildon itself is open, so you can poke at widgets and see how those work. The address book is not open. The telepathy component that talks to the cellular modem is not open. As for having to accept EULAs, I honestly don't remember accepting one of those, but I'm not going to say there are none. There's at least one which you get every time you install a package, where you have to check a box saying "Yes, I know this package is third party and will not sue Nokia if it causes my house to burn down, my wife to divorce me or causes somebody to steal the car". It's annoying, but I'm willing to live with it. The contents of apt's sources.list is:
deb https://downloads.maemo.nokia.com/fremantle/ssu/apps/ ./ 
deb https://downloads.maemo.nokia.com/fremantle/ssu/mr0 ./ 
deb https://downloads.maemo.nokia.com/fremantle/ovi/ ./ 
deb http://repository.maemo.org/extras/ fremantle free non-free
deb http://repository.maemo.org/extras-devel/ fremantle free non-free
(technically, it comes from /etc/apt/sources.list.d/hildon-application-manager.list, not sources.list.) I believe the built-in applications are generally not free, so rebuilding everything that is free will for instance leave you without any address book UI, the built-in map application or camera. Sadly, the X driver is also proprietary, so you won't be able to see anything either. I don't think you can usefully install another free distro on the N900. You might be able to, at some point, assuming somebody goes to the effort. The last question is "- Besides the non-free telephony stack, are there any other antifeatures I should be aware of?". The telephony stack is implemented around Telepathy, which is LGPL-ed free software. While it's correct that telepathy-ring (which talks to the cellular modem), the call UI and most of the address book are proprietary, the rest of Telepathy is free. There are SIP and XMPP connection managers that are free, and you can install more connection managers for MSN, IRC and so on. Also, I think it's important to emphasise that the telephony stack does not contain any antifeatures. The closest thing you would be able to find is probably the restriction to one active and one held call at the same time, but as one of the developers said: "That's to prevent the UI from going mad". While I like to tout the N900 as a free phone, it is in no way completely free. Large parts of it are free, and almost as importantly: most of the programming interfaces are free and at least somewhat documented, so if somebody wants to replace the built-in camera application with a free one, they can replace the DBus interface that the camera app provides. Ditto for maps applications, the address book and so on.

17 January 2010

Tollef Fog Heen: Moving SMS-es and contacts from iphone to N900

I've been using an iPhone since late 2007 as my primary phone, and so I've gotten quite a few contacts and SMS conversations stored on it. Now that Collabora has given me a nice and shiny N900, I wanted to move my contacts and conversations over, but this proved to be a bit more work than expected. Please note that the following procedure worked for me; I have tried to take reasonable steps to prevent anything breaking, but if something breaks, you get to keep both pieces. I am not responsible and this comes with absolutely no warranty. Take backups. The address book conversion script takes the SQLite database structure and converts that into a VCF file. It should be completely safe to run multiple times (it only does SELECT from the different tables in the contacts database, and you have made backups, haven't you?). If it dies with an "Unknown property", "Unknown label" or other error, you can poke it and see if you can work out what's wrong, or drop me an email and I'll see if I can help you. Assuming it doesn't fall over, it will spit out a series of vCards, which you should store in a file, copy to the N900 and open in the address book. Assuming you have fewer than 1000 contacts, they should now all be in your address book. If you have more, you need to split the file. There are a couple of known limitations as well. The procedure for exporting and importing SMSes is a bit more involved. First, export the SMSes by running the Perl script. It spits out a tab-separated file which you should copy to the N900 along with the smsimporter program from the smstools thread. Run ./smsimporter foo.csv and you should get all your SMSes put into the conversation app. I ended up compiling my own smsimporter, based on the 0.2.1 from the thread with the UUID patch too. Read the whole thread and it should be fairly clear.
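To give a flavour of what the address book conversion involves, here is a much cruder sketch in Python than the actual script: it assumes the iPhone database is AddressBook.sqlitedb with ABPerson and ABMultiValue tables, naively exports every multi-value entry as a TEL line, and ignores labels, properties and all the corner cases the real script has to deal with:

import sqlite3

DB = "AddressBook.sqlitedb"  # work on a copy taken from a backup, not the live file

def vcards(db_path):
    conn = sqlite3.connect(db_path)
    for rowid, first, last in conn.execute("SELECT ROWID, First, Last FROM ABPerson"):
        # ABMultiValue holds phone numbers, emails and so on, keyed by record_id;
        # this sketch dumps them all as TEL lines, which is an oversimplification.
        values = [row[0] for row in conn.execute(
            "SELECT value FROM ABMultiValue WHERE record_id = ? AND value IS NOT NULL",
            (rowid,))]
        lines = ["BEGIN:VCARD", "VERSION:3.0",
                 "N:%s;%s;;;" % (last or "", first or ""),
                 "FN:%s" % " ".join(p for p in (first, last) if p)]
        lines += ["TEL:%s" % value for value in values]
        lines.append("END:VCARD")
        yield "\n".join(lines)

if __name__ == "__main__":
    for card in vcards(DB):
        print(card)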

15 December 2009

Tollef Fog Heen: N900 first impressions

Collabora was kind enough to buy N900s for all its employees. Yay! I got mine on Friday and have been playing around with it quite a bit. It's very shiny and the user experience is a lot better than on the N810. There are a few graphical glitches; it seems it's XDamage damaging a bit of a window and it's just not quick enough to repaint. Not a problem, and it has far fewer of the instances of just hanging for half a second that my iPhone has. That is, it hasn't had any of those yet. The screen is good, but resistive. It takes a short while to get used to when you're used to capacitive, but it's not a problem at all. The keyboard is good, but I need to map something as the compose key. Having US/UK key caps and using the Norwegian layout is a bit confusing. Not really the fault of the device, though. The web browser is generally quite good. The gestures take a bit of time to get used to, but they're not hard as such. Some of the default "applications" are implemented as just links to the web pages of services like Twitter, which is a bit silly as you don't even get a version that's optimised for the N900. They're not useless, but they are absolutely nowhere near a real application. Also, the "Store" (Ovi Store) application/web page says "coming soon", which is quite odd. I'm not sure if I can change the selection of applications on the default application list, but modifying the desktop is easy. There seem to be few themes and background images available so far, at least in anything resembling official repositories. Hopefully this will improve over time. So far, I haven't actually written any code for the N900. I have some applications I want to write, mostly widget-style apps like "when does the next bus home leave from a bus stop close to me and where is the bus stop", but also some other ones. Battery life is not great. It almost did 48 hours today with a bit of use along the way, and I did charge it before it ran completely out, but when I'm used to closer to a week, it's not that good. The camera seems good and is quite fast; I think it took less than five seconds from opening the camera shutter until I had taken a picture. Shutter delay is quite bad at about a third or half a second, but this is a mobile phone (or mobile computer, as Nokia likes to call it) and not a DSLR, so I'm quite happy with it. As a phone, it seems fine so far. I can make calls and accept calls and there are no noticeable problems with it. It also functions as a modem/DUN over bluetooth, which is quite useful. Build quality seems good; there's a good feeling when sliding the keyboard in and out, but only time will tell how good it actually is. So far, I'm happy with it; it's a big step up from my previous UK phone (which is a Nokia E70; my iPhone is a 2G phone so I can't use it here with the provider I'm using). Hopefully I'll post more happy stories about it in the days to come.

3 December 2009

Tollef Fog Heen: ekey happiness

In my last post about the ekey, I complained about two things: a memory leak in the server, and missing reconnects if the client was disconnected for any reason. I've been meaning to blog about the follow-up for a while, but haven't had the time before now. Quite quickly after my blog post, Simtec engineers got in touch on IRC and we worked together to find out what the memory leak problem was. They also put in the reconnect support I asked for. All this in less than a week, for a device which only cost £36. To make things even better, they picked up some other small bug fixes/requests from me, such as making ekeyd-egd-linux just Suggest ekeyd, and the latest release (1.1.1) seems to have fixed some more problems. All in all, I'm very happy about it. To make things even better, Ian Molton (of Collabora) has been busy fixing up virtio_rng in the kernel and adding EGD support (including reconnection support) to qemu and thereby KVM. Hopefully all this hits the next stable releases and I can retire my egd-over-stunnel hack.

5 November 2009

Tollef Fog Heen: Package workflow

As 3.0 format packages are now allowed into the archive, I am thinking about what I would like the workflow to look like and hoping one of them fits me. For new upstream releases, I am imagining something like:
  1. New upstream version is released.
  2. git fetch + merge into upstream branch.
  3. Import tarballs, preferably in their original format (bz2/gzip), using pristine-tar.
  4. Merge upstream to debian branch. Do necessary fixups and adjustments. At this point, the upstream..debian branch delta is what I want to apply to the upstream release. The reason I need to apply this delta is so I get all generated files into the package that's built and uploaded.
  5. The source package has two functions at this point: be a starting point for further hacking; and be the source that buildds use to build the binary Debian packages. For the former, I need the git repository itself. It is increasingly my preferred form of modification and so I consider it part of the source. For the latter, it might be easiest just to ship the orig.tar.{gz,bz2} and the upstream..debian delta. This does require the upstream..debian delta not to change any generated files, which I think is a fair requirement.
I'm not actually sure which source format can give me this. I think maybe the 3.0 (git) format can, but I haven't played around with it enough to see. I also don't know if any tools actually support this workflow.

2 November 2009

Tollef Fog Heen: Distributing entropy

Back at the Debian barbeque party at the end of August, I got myself an EntropyKey from the kind folks at Simtec. It has been working so well that I haven't really had a big need to blog about it: plug it in and watch /proc/sys/kernel/random/entropy_avail never empty. However, Collabora, where I am a sysadmin, also got one. We are using a few virtual machines rather than physical machines, as we want the security domains but don't have any extreme performance needs. Like most VMs, they have been starved of entropy. One problem presents itself: how do we get the entropy from the host system where the key is plugged in to the virtual machines? Kindly enough, the ekeyd package also includes ekeyd-egd-linux, which speaks EGD, the TCP protocol the Entropy Gathering Daemon defined a long time ago. ekeyd itself can also output in the same protocol, so this should be easy enough, or so you would think. Our VMs are all bridged together on the same network that is also exposed to the internet, and the EGD protocol doesn't support any kind of encryption, so in order to be safe rather than sorry, I decided to encrypt the entropy. Some people think I'm mad for encrypting what is essentially random bits, but that's me for you. So, I ended up setting up stunnel, telling ekeyd on the host to listen to localhost on a given port, and stunnel to forward connections to that port. On each VM, I set up stunnel to forward connections from a given port on localhost to the port on the physical machine where stunnel is listening. ekeyd-egd-linux is then told to connect to the port on localhost where stunnel is listening. After a bit of certificate fiddling and such, I can do:
# pv -rb < /dev/random > /dev/null  
17.5kB [4.39kB/s]
which is way, way better than what you will get without a hardware RNG. The hardware itself seems to be delivering about 32kbit/s of entropy. My only gripes at this point are that the EGD implementation could use a little bit more work: it seems to leak memory in the EGD server implementation, and it would be very useful if the client would reconnect if it was disconnected for any reason. Even with those missing bits, I'm happy about the key so far.
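For illustration, a minimal sketch of what the stunnel arrangement described above could look like; the port numbers, certificate paths and host name are made up for the example:

# On the host with the EntropyKey (ekeyd's EGD server listening on 127.0.0.1:8888):
# /etc/stunnel/egd-server.conf
cert = /etc/stunnel/egd.pem
[egd]
accept = 8889
connect = 127.0.0.1:8888

# On each VM (ekeyd-egd-linux pointed at 127.0.0.1:8888):
# /etc/stunnel/egd-client.conf
client = yes
cert = /etc/stunnel/egd.pem
[egd]
accept = 127.0.0.1:8888
connect = host.example.org:8889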
